Share of Professionals Ranking Data Privacy as a Top GenAI Concern Jumps 50 Points in 2024
Concerns over data privacy in relation to generative AI have surged, a new report from Deloitte has found. While last year only 22% of professionals ranked it among their top three concerns, this year the figure has risen to 72%.
The next highest ethical GenAI concerns were transparency and data provenance, with 47% and 40% of professionals, respectively, ranking them in their top three this year. Meanwhile, only 16% expressed concern over job displacement.
Staff are becoming more curious about how AI technology operates, particularly with regard to sensitive data. A September study by HackerOne found that nearly half of security professionals believe AI is risky, with many seeing leaked training data as a threat.
Similarly, 78% of business leaders ranked “safe and secure” as one of their top three ethical technology principles, a 37% increase from 2023, further demonstrating that security is top of mind.
The survey results come from Deloitte’s 2024 “State of Ethics and Trust in Technology” report, which surveyed over 1,800 business and technical professionals worldwide about the ethical principles they apply to technologies, specifically GenAI.
High-profile AI security incidents are likely drawing more attention
Just over half of respondents to this year’s and last year’s reports said that cognitive technologies like AI and GenAI pose the biggest ethical risks compared with other emerging technologies, such as virtual reality, quantum computing, autonomous vehicles, and robotics.
This new focus may be related to a wider awareness of the importance of data security due to well-publicised incidents, such as when a bug in OpenAI’s ChatGPT exposed personal data of around 1.2% of ChatGPT Plus subscribers, including names, emails, and partial payment details.
Trust in the chatbot was likely further eroded by news that hackers had breached an online forum used by OpenAI employees and stolen confidential information about the firm’s AI systems.
SEE: Artificial Intelligence Ethics Policy
“Widespread availability and adoption of GenAI may have raised respondents’ familiarity and confidence in the technology, driving up optimism about its potential for good,” said Beena Ammanath, Global Deloitte AI Institute and Trustworthy AI leader, in a press release.
“The continued cautionary sentiments around its apparent risks underscore the need for specific, evolved ethical frameworks that enable positive impact.”
AI legislation is impacting how organisations operate worldwide
Naturally, more personnel are using GenAI at work than last year, with the percentage of professionals reporting that they use it internally rising by 20% between Deloitte’s year-over-year reports.
A huge 94% said their companies have embedded it into processes in some way. However, most indicated it is still in the pilot phase or use is limited, with only 12% saying it is in widespread use. This aligns with recent Gartner research that found most GenAI projects don’t make it past the proof-of-concept stage.
SEE: IBM: While Enterprise Adoption of Artificial Intelligence Increases, Barriers are Limiting Its Usage
Regardless of its pervasiveness, decision makers want to ensure that their use of AI does not get them into trouble, particularly when it comes to legislation. The highest-rated reason for having ethical tech policies and guidelines in place was compliance, cited by 34% of respondents, while regulatory penalties were among the top three concerns reported if such standards are not followed.
The E.U. AI Act came into force on Aug. 1 and imposes strict requirements on high-risk AI systems to ensure safety, transparency, and ethical usage. Non-compliance could result in fines of up to €35 million ($38 million) or 7% of global turnover for the most serious violations, and up to €7.5 million ($8.1 million) or 1.5% of turnover for lesser breaches.
Over a hundred companies, including Amazon, Google, Microsoft, and OpenAI, have already signed the E.U. AI Pact and volunteered to start implementing the Act’s requirements ahead of legal deadlines. This both demonstrates their commitment to responsible AI deployment to the public and helps them avoid future legal challenges.
Similarly, in October 2023, the U.S. unveiled an AI Executive Order featuring wide-ranging guidance on maintaining safety, civil rights, and privacy within government agencies while promoting AI innovation and competition throughout the country. While it isn’t a law, many U.S.-operating companies may make policy changes in response to ensure compliance with evolving federal oversight and public expectations of AI safety.
SEE: G7 Countries Establish Voluntary AI Code of Conduct
The E.U. AI Act has had particular influence in Europe, with 34% of European respondents saying their organisations had made changes to their use of AI over the past year in response. However, the impact is more widespread: 26% of South Asian respondents and 16% of North and South American respondents also made changes because of the Act’s introduction.
Furthermore, 20% of U.S.-based respondents said they had made changes at their organisations in response to the executive order. A quarter of South Asian respondents, 21% in South America, and 12% in Europe said the same.
“Cognitive technologies such as AI are recognized as having both the highest potential to benefit society and the highest risk of misuse,” the report’s authors wrote.
“The accelerated adoption of GenAI may be outpacing organizations’ capacity to govern the technology. Companies should prioritize both the implementation of ethical standards for GenAI and meaningful selection of use cases to which GenAI tools are applied.”